The Search Revolution: How AI Is Rethinking How We Find Information
In the digital age, the ability to search for information has become second nature—but according to the recent article “In Search of Better Search” in Communications of the ACM (CACM), we’re on the cusp of a new, more disruptive phase. The piece, penned by Samuel Greengard and published October 28, 2025, argues that large language models (LLMs) and generative AI are not just improving search—they’re fundamentally reshaping what search means. (Communications of the ACM)
A Brief Walk through the Evolution of Search
Search engines have come a long way. The first such tool, Archie, appeared around 1990 and simply indexed files on FTP servers. (Communications of the ACM) As the Web emerged publicly in 1995, search engines like AltaVista, Lycos, Excite, and others proliferated, but they relied mostly on keyword matching and lacked strong methods for assessing source credibility. (Communications of the ACM) Then came PageRank in 1998, introduced alongside the founding of Google, which brought link-based ranking and marked a turning point for search quality. (Communications of the ACM) Over the past 25 years, Google has dominated the search market, layering in innovations such as semantic search and unified models while shaping the SEO incentives of the wider Web. (Communications of the ACM)
Enter AI Search: Promise and Peril
Now, we’re seeing a seismic shift: generative AI and LLMs like ChatGPT, Gemini, Claude, and Perplexity are transforming how we engage with information. According to a study by investment-banking firm Evercore, adoption of AI in search could hit 88% by 2028. (Communications of the ACM)
These systems deliver conversational, natural-language answers to complex questions, bypassing the classic list-of-links model: ask “How do I cook the perfect omelet?” or “Which towns should I visit on the Amalfi Coast?” and you get a direct answer. The benefits: speed, relevance, and ease. (Communications of the ACM)
But there’s a trade‑off. The article raises concerns that AI search might misdirect us, reduce our engagement with sources, and make us more vulnerable to misinformation. Two major areas of concern stand out:
- Accuracy and trustworthiness: The article cites research from the Tow Center for Digital Journalism (Columbia University) showing that 60% of chatbot responses to test questions were at least somewhat incorrect. (Communications of the ACM) Worse yet, systems can hallucinate, inventing citations or fabricating links even when given external source references. (Communications of the ACM) Because LLMs generate content by predicting likely words rather than through true comprehension, they are structurally vulnerable to errors. (Communications of the ACM)
- Impact on critical thinking and the Web ecosystem: The depth of human exploration could suffer. The article warns that shifting from “search engine → webpage dive” to “chatbot answer” may reduce users’ exposure to diverse perspectives and weaken search literacy. (Communications of the ACM) It also suggests that the Web’s ecosystem of original content suffers if fewer people click through to websites and write new pages. (Communications of the ACM)
What’s Next? Hybrid Models and the Path Forward
The article proposes a balanced future: traditional search + generative AI hybrid systems. For example, forward‑thinking search providers are already combining AI summaries with links—so you get an overview and can verify via original sources. (Communications of the ACM)
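The hybrid pattern described above, a generated overview paired with the links that support it, can be sketched as a simple data structure. The class and field names below are illustrative only and are not drawn from any real search API:

```python
# A minimal sketch of a hybrid search result: an AI-generated summary
# plus the source links a reader can click through to verify it.
from dataclasses import dataclass, field


@dataclass
class HybridResult:
    summary: str                                       # generated overview
    sources: list[str] = field(default_factory=list)   # verifiable links

    def render(self) -> str:
        """Format the summary with a numbered list of sources."""
        refs = "\n".join(f"  [{i + 1}] {url}" for i, url in enumerate(self.sources))
        return f"{self.summary}\nSources:\n{refs}"


result = HybridResult(
    summary="PageRank (1998) introduced link-based ranking to web search.",
    sources=["https://cacm.acm.org/news/in-search-of-better-search/"],
)
print(result.render())
```

The key design point is that the sources travel with the summary, so verification is one click away rather than an afterthought.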
Additionally, alternative frameworks may emerge: small language models (SLMs) and agentic search systems (which plan, reason, interact) could reduce energy consumption and offer more transparency. (Communications of the ACM) Commercial pressures—advertising, monetization, ranking incentives—also complicate matters: when profit motives dominate, the integrity of search results may degrade. (Communications of the ACM)
Why It Matters to You (and Me)
For professionals in AI and data science, this article underscores key themes:
- The design of search systems will become a major frontier: relevance, credibility, transparency, summarization, user agency.
- When building or using AI‑powered systems (NLP pipelines, automated email processing, and the like), the trade‑offs between convenience and verifiability matter.
- Understanding how users engage with information—whether via clicks or conversation—is crucial for modelling behaviour and designing interfaces.
- As AI search proliferates, industries around content creation, SEO, and information retrieval may shift dramatically.
In short: this is not just “search improved”—it’s “search reinvented”, with profound implications for technology strategy, user experience, and information ecosystems.
Glossary
- Large Language Model (LLM): A machine‑learning model, typically based on transformer architecture, trained on vast volumes of text data to predict words and generate human‑like text responses.
- Generative AI (GenAI): AI systems that can create content—text, images, audio, or video—rather than only analyzing or classifying it.
- Retrieval‑Augmented Generation (RAG): A technique where an LLM is supplemented by external data sources—such as web pages or databases—via retrieval mechanisms, then generates responses informed by those retrieved chunks. (Communications of the ACM)
- Semantic Search: A search process that attempts to understand user intent and contextual meaning (rather than exact keyword matching), often by representing words/sentences in vector space.
- Hallucination (in AI): The phenomenon where a model generates plausible‑sounding but incorrect or fictitious information.
- Agentic Search: A proposed search paradigm where the system acts more like an agent—it plans, reasons, interacts with tools/data, rather than passively returning links or summaries. (Communications of the ACM)
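Several of these glossary terms can be made concrete in a small sketch. The toy example below, which assumes bag-of-words vectors in place of learned embeddings and a stub in place of a real LLM, shows the retrieve-then-generate shape of RAG and, in miniature, semantic-style ranking:

```python
# Toy RAG sketch: rank documents by similarity to the query, then hand
# the top matches to a "generator" (here just a stub that cites them).
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def rag_answer(query: str, corpus: list[str]) -> str:
    """Retrieve supporting passages, then generate a grounded answer.
    A real system would pass the passages to an LLM; this stub only
    shows the shape of the pipeline."""
    passages = retrieve(query, corpus)
    context = " | ".join(passages)
    return f"Answer to '{query}' grounded in: {context}"


corpus = [
    "PageRank ranks pages by the structure of incoming links.",
    "Semantic search maps queries and documents into vector space.",
    "The Amalfi Coast has towns such as Positano and Ravello.",
]
print(rag_answer("how does semantic search work", corpus))
```

Real systems replace the word-count vectors with dense neural embeddings and the stub with an LLM call, but the grounding step, retrieval before generation, is what distinguishes RAG from a model answering purely from its training data.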
Final Word
The article by Samuel Greengard paints a compelling portrait of the search transformation underway: what we once navigated via lists of links may soon be served up via conversational AI. The promise is alluring: faster, more tailored answers. But the risks are real: less transparency, weaker critical thinking, and more misinformation. As we embrace these tools, we must also scrutinize their design, guard our information literacy, and build for a future where quality, reliability, and user agency remain central.
Source link: https://cacm.acm.org/news/in-search-of-better-search/